12 research outputs found

    Fixed Boundary Flows

    We consider the fixed boundary flow, which carries the canonical interpretability of principal components extended to non-linear Riemannian manifolds. We aim to find a flow with fixed starting and ending points for multivariate datasets lying on an embedded non-linear Riemannian manifold, in contrast to the principal flow, which starts from the center of the data cloud. Both points are given in advance, using the intrinsic metric on the manifold. From the perspective of geometry, the fixed boundary flow is defined as an optimal curve that moves through the data cloud: at any point on the flow, it maximizes the inner product of the locally computed vector field and the tangent vector of the flow. The rigorous definition is given by means of an Euler-Lagrange problem, and its solution reduces to that of a Differential Algebraic Equation (DAE). A high-level algorithm is given to compute the fixed boundary flow numerically. We show that the fixed boundary flow is a concatenation of three segments, one of which coincides with the usual principal flow when the manifold reduces to Euclidean space. We illustrate how the fixed boundary flow can be used and interpreted, and demonstrate its application to real data.
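    The abstract describes the flow informally: at every point, the curve should align its tangent with a locally estimated principal direction, subject to fixed endpoints. As a loose Euclidean toy illustration only (not the authors' DAE-based algorithm on a manifold), the sketch below relaxes a polyline between two fixed endpoints toward the local first principal component of nearby points; all names, bandwidths, and step sizes are hypothetical choices:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # synthetic data cloud concentrated along the x-axis (hypothetical example)
    data = np.column_stack([rng.uniform(-2.0, 2.0, 400),
                            0.1 * rng.normal(size=400)])

    def local_direction(pt, data, h=0.5):
        """Leading eigenvector of the covariance of points within bandwidth h of pt."""
        nbrs = data[np.linalg.norm(data - pt, axis=1) < h]
        if len(nbrs) < 3:
            return np.array([1.0, 0.0])
        _, vecs = np.linalg.eigh(np.cov(nbrs.T))
        return vecs[:, -1]                    # eigenvector of the largest eigenvalue

    def fixed_boundary_curve(a, b, data, n=20, iters=200, step=0.05):
        """Polyline from a to b whose interior points drift toward the local
        principal direction; the two boundary points are never moved."""
        t = np.linspace(0.0, 1.0, n)[:, None]
        curve = (1 - t) * a + t * b           # initialise with the straight chord
        for _ in range(iters):
            for i in range(1, n - 1):
                v = local_direction(curve[i], data)
                tangent = curve[i + 1] - curve[i - 1]
                tangent = tangent / np.linalg.norm(tangent)
                if v @ tangent < 0:           # sign-align v with the travel direction
                    v = -v
                anchor = 0.5 * (curve[i - 1] + curve[i + 1])   # smoothing anchor
                curve[i] = (1 - step) * curve[i] + step * (anchor + 0.1 * v)
        return curve

    a, b = np.array([-2.0, 0.0]), np.array([2.0, 0.0])
    curve = fixed_boundary_curve(a, b, data)
    ```

    The boundary points play the role of the fixed starting and ending points; the interior update is a crude surrogate for the inner-product maximization in the abstract.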

    Flexible semi-parametric quantile regression models

    This work involves interquantile identification and variable selection in two semi-parametric quantile regression models, an additive model and an additive coefficient model. In the first part, we investigate the commonality of non-parametric component functions among different quantile levels in additive regression models with fixed dimension. We propose two fused adaptive group LASSO penalties to shrink the difference of functions between neighbouring quantile levels. The proposed methodology is able to simultaneously estimate the non-parametric functions and identify the quantile regions where the functions are unvarying, and thus is expected to perform better than standard additive quantile regression when there exists a region of quantile levels on which the functions are unvarying. In the second part, we consider variable selection in quantile additive coefficient models (ACM) with high dimensionality under a sparsity assumption. First, we consider the oracle estimator for quantile ACM when the number of additive coefficient functions is diverging. Then we adopt the SCAD penalty and investigate the non-convex penalized estimator for model estimation and variable selection. Under some regularity conditions, we prove that the oracle estimator is a local solution of the SCAD penalized quantile regression problem. Simulation studies and real data applications illustrate that the proposed methods in this thesis yield better numerical results than some existing methods.

    Quantile regression for additive coefficient models in high dimensions

    In this paper, we consider quantile regression in additive coefficient models (ACM) with high dimensionality under a sparsity assumption and approximate the additive coefficient functions by B-spline expansion. First, we consider the oracle estimator for quantile ACM when the number of additive coefficient functions is diverging. Then we adopt the SCAD penalty and investigate the non-convex penalized estimator for model estimation and variable selection. Under some regularity conditions, we prove that the oracle estimator is a local solution of the SCAD penalized quantile regression problem. Simulation studies and an application to a genome-wide association study show that the proposed method yields good numerical results.
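    The basic construction in the abstract, approximating each additive coefficient function by a B-spline expansion and minimizing the check (pinball) loss over the expanded design, can be sketched in a few lines. This is a minimal unpenalized illustration only (no SCAD penalty, no variable selection, no high-dimensional setting); the simulated data, knot placement, and the linear-programming formulation of the check loss are standard textbook choices, not the paper's implementation:

    ```python
    import numpy as np
    from scipy.interpolate import BSpline
    from scipy.optimize import linprog

    rng = np.random.default_rng(1)
    n = 200
    t_idx = rng.uniform(0.05, 0.95, n)       # index variable of the coefficient functions
    x = rng.normal(size=(n, 2))              # two covariates
    # hypothetical additive coefficient model: y = b1(t)*x1 + b2(t)*x2 + noise
    y = (np.sin(2 * np.pi * t_idx) * x[:, 0] + t_idx**2 * x[:, 1]
         + 0.1 * rng.normal(size=n))

    k = 3                                    # cubic B-splines
    knots = np.r_[[0.0] * (k + 1), np.linspace(0.2, 0.8, 4), [1.0] * (k + 1)]
    Basis = BSpline.design_matrix(t_idx, knots, k).toarray()   # (n, 8) basis matrix
    # ACM design matrix: each covariate multiplies every basis function
    X = np.hstack([Basis * x[:, [j]] for j in range(2)])

    def pinball_fit(X, y, tau):
        """Quantile regression as a linear program:
        min tau*1'u + (1-tau)*1'v  s.t.  X(b+ - b-) + u - v = y, all vars >= 0."""
        n, p = X.shape
        c = np.r_[np.zeros(2 * p), tau * np.ones(n), (1 - tau) * np.ones(n)]
        A_eq = np.hstack([X, -X, np.eye(n), -np.eye(n)])
        res = linprog(c, A_eq=A_eq, b_eq=y, method="highs")
        return res.x[:p] - res.x[p:2 * p]    # beta = beta_plus - beta_minus

    beta = pinball_fit(X, y, 0.5)            # median (tau = 0.5) spline coefficients
    ```

    Reshaping `beta` into per-covariate blocks of spline coefficients recovers the estimated additive coefficient functions on a grid.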

    Interquantile shrinkage in additive models

    In this paper, we investigate the commonality of nonparametric component functions among different quantile levels in additive regression models. We propose two fused adaptive group Least Absolute Shrinkage and Selection Operator penalties to shrink the difference of functions between neighbouring quantile levels. The proposed methodology is able to simultaneously estimate the nonparametric functions and identify the quantile regions where functions are unvarying, and thus is expected to perform better than standard additive quantile regression when there exists a region of quantile levels on which the functions are unvarying. Under some regularity conditions, the proposed penalised estimators can theoretically achieve the optimal rate of convergence and identify the true varying/unvarying regions consistently. Simulation studies and a real data application show that the proposed methods yield good numerical results.
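    The core idea, shrinking the difference of function estimates between neighbouring quantile levels, can be illustrated with a simplified stand-in. The sketch below adds a squared-difference fusion penalty (a convex surrogate, not the paper's fused adaptive group LASSO) to a convolution-smoothed check loss, with polynomial features standing in for the nonparametric basis; all data, penalties, and tuning constants are hypothetical:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)
    n = 150
    x = rng.uniform(0.0, 1.0, n)
    y = np.sin(2 * np.pi * x) + 0.2 * rng.normal(size=n)

    taus = np.array([0.3, 0.5, 0.7])
    # polynomial features standing in for the nonparametric additive basis
    Phi = np.column_stack([np.ones(n), x, x**2, x**3])
    p = Phi.shape[1]

    def smoothed_pinball(r, tau, eps=1e-3):
        """Smooth surrogate for the check loss; tends to the pinball loss as eps -> 0."""
        return tau * r + eps * np.logaddexp(0.0, -r / eps)

    def objective(theta_flat, lam=0.1):
        Theta = theta_flat.reshape(len(taus), p)
        loss = sum(smoothed_pinball(y - Phi @ Theta[i], taus[i]).mean()
                   for i in range(len(taus)))
        # fusion penalty on differences between neighbouring quantile levels
        fuse = sum(np.sum((Theta[i + 1] - Theta[i])**2)
                   for i in range(len(taus) - 1))
        return loss + lam * fuse

    res = minimize(objective, np.zeros(len(taus) * p), method="L-BFGS-B")
    Theta = res.x.reshape(len(taus), p)      # one coefficient row per quantile level
    ```

    With a large fusion weight the rows of `Theta` are pulled together, which mimics identifying a quantile region where the functions are unvarying; the group-LASSO version in the paper sets differences exactly to zero rather than merely shrinking them.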

    Simultaneous estimation of linear conditional quantiles with penalized splines

    We consider smooth estimation of the conditional quantile process in linear models using penalized splines. For linear quantile regression problems, usually separate models are fitted at a finite number of quantile levels and then information from different quantiles is combined in interpreting the results. We propose a smoothing method based on penalized splines that computes the conditional quantiles all at the same time. We consider both fixed-knots and increasing-knots asymptotics of the estimator and show that it converges to a multivariate Gaussian process. Simulations show that smoothing can result in more accurate estimation of the conditional quantiles. The method is further illustrated on a real data set. Empirically (although not theoretically) we observe that the crossing quantile curves problem can often disappear using the smoothed estimator.
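    Fitting all quantile levels at once with a smoothness penalty across the quantile index can be sketched as follows. This toy version penalizes squared second differences of the linear-model coefficients over a tau grid (a crude stand-in for the paper's penalized-spline construction in tau) and then counts empirical quantile crossings; the data-generating process and tuning constants are hypothetical:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(3)
    n = 200
    x = rng.uniform(0.0, 2.0, n)
    y = 1.0 + 2.0 * x + (0.5 + 0.5 * x) * rng.normal(size=n)  # heteroscedastic errors

    taus = np.linspace(0.1, 0.9, 9)
    X = np.column_stack([np.ones(n), x])

    def smoothed_pinball(r, tau, eps=1e-3):
        """Smooth surrogate for the check loss so a quasi-Newton solver applies."""
        return tau * r + eps * np.logaddexp(0.0, -r / eps)

    def objective(b_flat, lam=5.0):
        B = b_flat.reshape(len(taus), 2)      # beta(tau) evaluated on the tau grid
        loss = sum(smoothed_pinball(y - X @ B[i], taus[i]).mean()
                   for i in range(len(taus)))
        # roughness penalty on second differences of beta over tau
        rough = np.sum(np.diff(B, 2, axis=0)**2)
        return loss + lam * rough

    res = minimize(objective, np.zeros(len(taus) * 2), method="L-BFGS-B")
    B = res.x.reshape(len(taus), 2)

    pred = X @ B.T                            # (n, n_tau) fitted quantiles per point
    crossings = int(np.sum(np.diff(pred, axis=1) < 0))
    ```

    With enough smoothing, the fitted curves are usually monotone in tau at each design point, which mirrors the empirical (though not theoretical) non-crossing behaviour the abstract reports.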
